Two distinct design philosophies currently define how autonomous intelligent systems engage with the world: AI agents and agentic AI. The terms sound similar, but they describe technologies with different capabilities, architectures, and degrees of autonomy. The purpose of this introductory post is to distinguish between AI agents and agentic AI, introduce use cases from business and academia, and explore a few of the ethical and practical consequences of implementing agentic systems. At the end I'll leave you with a list of resources that I am finding helpful in navigating this topic.
I am trying to connect to our work SAS server from R, to extract datasets for further processing/analysis in R. Due to corporate constraints, the only solution available is a SAS JDBC connection using the IOM server subprotocol. I have been trying to make this work with our IT and SAS support on and off for months now. I had them install the SAS JDBC drivers on my machine, and via trial and error arrived at the piece of R code below to establish the connection:

```r
library(RJDBC)
library(rJava)
.jinit()
driver <- JDBC(driverClass = "com.sas.rio.MVADriver")
conn <- dbConnect(
  driver,
  "jdbc:sasiom://xxx.xxx.xxx:12345",
  "user",
  "pwd"
)
```
Running the above code I am able to create the driver object with the JDBC() function; however, dbConnect throws the exception below:

```
Error in dbConnect(driver, "jdbc:sasiom://xxx.xxx.xxx:12345", :
  Unable to connect JDBC to jdbc:sasiom://xxx.xxx.xxx:12345
  JDBC ERROR: com/sas/util/ChainedExceptionInterface
```

Interestingly, using the dbConnect function with a modified connection string,

```r
conn <- dbConnect(
  driver,
  "jdbc:sasiom://xxx.xxx.xxx:12345?user=user&password=pwd"
)
```
gives a different error:

```
Error in dbConnect(driver, "jdbc:sasiom://xxx") :
  Unable to connect JDBC to jdbc:sasiom://xxx
  JDBC ERROR: com/sas/util/RBBase
```

Unfortunately, neither of the error messages gives much to work with. We have tried experimenting with different Java versions, and I have triple-checked that all required JAR files are accessible via CLASSPATH. I have researched the internet extensively and asked LLMs to no avail, so now my last hope is the power of the community hivemind.

- Java version: 1.8.0_60
- SAS version: 9.4 Grid M8
- JDBC drivers: 9.4
Agentic AI is changing how work happens. Not in sweeping, cinematic moments, but in steady shifts. A task that once needed a person now begins with an agent. A workflow that once relied on a team now relies on coordination. A decision that once waited for an analyst now arrives faster, clearer, and framed with evidence.
For data practitioners, understanding agentic AI is no longer optional. It is the next chapter of data work. And like any good chapter, it helps to know the beats before you turn the page.
Begin with purpose, not technology
Every agent begins with a purpose. This sounds simple. Yet in practice it is where most systems drift. A well-framed purpose shapes behaviour, boundaries, escalation paths, and the definition of done. Without that purpose, agents wander. They do too much, or too little, or the wrong thing entirely. The strongest practitioners treat the purpose statement like a compass. Set it cleanly and everything downstream becomes easier to govern.
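To make that compass tangible, here is a minimal sketch in Python; the AgentPurpose structure and every field name in it are illustrative assumptions, not a standard API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPurpose:
    """Illustrative purpose statement: one goal, explicit boundaries, a 'done'."""
    goal: str                        # the one thing the agent exists to do
    out_of_scope: tuple[str, ...]    # boundaries the agent must not cross
    escalate_when: tuple[str, ...]   # conditions that hand control to a human
    definition_of_done: str          # the observable end state

triage = AgentPurpose(
    goal="Route inbound data-quality tickets to the right owner",
    out_of_scope=("modifying production tables", "closing tickets"),
    escalate_when=("ticket touches regulated data", "confidence below 0.7"),
    definition_of_done="Ticket assigned, with a one-paragraph rationale attached",
)
```

Written as data rather than prose, the purpose becomes something tests and reviews can hold the agent against.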
Context gives an agent its world
Agents act well only when they understand the scene they are walking into. Context is not a luxury. It is the map. It includes data quality, metadata, lineage, domain constraints, policies, and risks. It includes what the agent must avoid as much as what it must pursue. When context is weak, the agent’s world collapses into guesswork. When context is strong, the agent becomes precise, grounded, and trustworthy. The craft lies not in feeding it more data, but in feeding it the right data at the right moment.
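As a rough sketch of "the right data at the right moment", the hypothetical select_context helper below hands an agent only the catalog entries tagged as relevant to its task; the catalog shape is an assumption for the example:

```python
def select_context(task_tags: set[str], catalog: dict[str, dict]) -> dict[str, dict]:
    """Hand the agent only the catalog entries relevant to this task."""
    return {
        name: meta
        for name, meta in catalog.items()
        if task_tags & set(meta.get("tags", []))   # tag overlap = relevant
    }

catalog = {
    "sales_daily": {"tags": ["sales", "finance"], "lineage": "warehouse.sales"},
    "hr_records":  {"tags": ["hr"], "lineage": "warehouse.hr"},
}
print(select_context({"sales"}, catalog))   # only sales_daily survives
```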
Break the work into scenes
Agentic workflows thrive on crisp task boundaries. A good practitioner thinks like a storyteller breaking a narrative into scenes. Each scene should be small, testable, and reversible. It should have a clear transition to the next step. When teams fail here, they create agents that try to do everything in one breath. When they succeed, they create modular systems that can be monitored, improved, and swapped out without disrupting the whole plot.
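One way this can look in code, as an illustrative pattern rather than any particular framework: each scene is a small function paired with an explicit undo, so the workflow can be tested scene by scene and unwound on failure.

```python
from typing import Callable

# A scene: (name, do, undo). Small, testable, reversible.
Step = tuple[str, Callable[[dict], dict], Callable[[dict], dict]]

def run_scenes(state: dict, steps: list[Step]) -> dict:
    completed: list[Step] = []
    try:
        for name, do, undo in steps:
            state = do(state)            # one small, observable transition
            completed.append((name, do, undo))
    except Exception:
        for name, _, undo in reversed(completed):
            state = undo(state)          # unwind finished scenes on failure
        raise
    return state
```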
Autonomy is earned, not granted
Many teams want full autonomy on day one. It rarely works. The safer path is staged autonomy. First assistive. Then semi-autonomous. Then supervised autonomy where the agent can handle more, but stays inside a visible frame. This progression teaches the organisation what freedom it can genuinely sustain. It also teaches the agent what the organisation can tolerate. Mature teams treat autonomy as a sliding scale, not a switch.
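A minimal sketch of that sliding scale; the level names and the gate are assumptions for the example, not a prescribed design:

```python
from enum import Enum, auto

class Autonomy(Enum):
    ASSISTIVE = auto()        # agent drafts, a person acts
    SEMI_AUTONOMOUS = auto()  # agent acts, a person approves first
    SUPERVISED = auto()       # agent acts alone, inside a visible frame

def may_act_alone(level: Autonomy, action_risk: str) -> bool:
    """Autonomy as a sliding scale, not a switch."""
    if level is Autonomy.SUPERVISED:
        return action_risk == "low"   # even earned autonomy stays framed
    return False                      # everything else routes through a human
```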
Memory must be intentional
Memory is not just a feature. It is a governance decision. What an agent remembers, for how long, and for what purpose creates trust or erodes it. Some roles need short memory: a single session. Others need project-level memory. A few need domain-level memory that evolves over time. But memory must be bounded. Practitioners must define retention, permissioning, redaction, and auditability. Without these rules, memory becomes both a liability and a mystery.
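As a sketch, assuming a simple in-process store (the BoundedMemory class and its rules are illustrative, not a library API), retention and redaction might be enforced like this:

```python
import time

class BoundedMemory:
    """Illustrative memory with explicit retention and redaction rules."""

    def __init__(self, ttl_seconds: float, redacted_keys: set[str]):
        self.ttl = ttl_seconds
        self.redacted_keys = redacted_keys       # never stored, by policy
        self._items: list[tuple[float, str, str]] = []

    def remember(self, key: str, value: str) -> None:
        if key in self.redacted_keys:
            return                               # redaction on the way in
        self._items.append((time.time(), key, value))

    def recall(self) -> list[tuple[str, str]]:
        cutoff = time.time() - self.ttl          # retention enforced on read
        self._items = [it for it in self._items if it[0] >= cutoff]
        return [(k, v) for _, k, v in self._items]

session = BoundedMemory(ttl_seconds=3600, redacted_keys={"customer_ssn"})
session.remember("preferred_format", "weekly summary")
session.remember("customer_ssn", "123-45-6789")  # silently refused by policy
```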
Coordination beats brute force
The future of agentic AI is not a single brilliant agent. It is a cast. Planners, designers, evaluators, testers, critics, and reporters. Each contributes a different strength. The real magic lies in the choreography: who leads, who hands off, who verifies, and who closes the loop. Most of the value comes not from capability, but from coordination. Teams that master multi-agent orchestration unlock speed and quality that no single model can deliver alone.
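A toy sketch of that choreography, with hypothetical planner, worker, and critic roles standing in for real agents:

```python
def planner(task: str) -> list[str]:
    """Toy planner: breaks the task into ordered steps."""
    return [f"{task}: gather inputs", f"{task}: produce draft"]

def worker(step: str) -> str:
    """Toy worker: executes one step and hands the result forward."""
    return f"completed [{step}]"

def critic(result: str) -> bool:
    """Toy critic: verifies before anything leaves the system."""
    return "draft" in result

def orchestrate(task: str) -> str:
    outputs = [worker(step) for step in planner(task)]  # the handoff chain
    final = outputs[-1]
    if not critic(final):                               # closing the loop
        raise RuntimeError("review failed; escalate to a human")
    return final

print(orchestrate("weekly sales summary"))
```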
Evaluation is where the craft hides
It is tempting to treat evaluation as a safeguard. Something you do at the end. Something a person checks before approving the output. Yet in agentic AI, evaluation is a design discipline. Practitioners must shape feedback loops, scoring criteria, escalation triggers, and checkpoints. These mechanisms teach the agent what good looks like. They also teach the organisation what to expect. The most resilient systems weave evaluation throughout the workflow, not at the edges.
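As an illustrative sketch (the thresholds and the length-based criterion are placeholder assumptions), an evaluator can return both a score and a routing decision, so feedback loops and escalation triggers live inside the workflow rather than at its edges:

```python
def evaluate(output: str) -> tuple[float, str]:
    """Toy evaluator: a score plus a routing decision, not a final gate."""
    score = min(len(output) / 100, 1.0)   # stand-in for real scoring criteria
    if score < 0.2:
        return score, "escalate"          # escalation trigger: a human looks
    if score < 0.6:
        return score, "retry"             # feedback loop: agent tries again
    return score, "accept"                # checkpoint passed

for draft in ("too short", "a" * 150):
    print(evaluate(draft))
```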
Governance must travel with the agent
Agentic AI cannot rely on governance that sits outside the system. Agents need to carry their own explainability. Logs, rationales, structured evidence, and traceable paths should accompany their actions. This is not bureaucracy. It is how you build trust at scale. When stakeholders can see how the agent arrived at a decision, they lean in rather than pull back. Governance is not a separate layer. It is part of the agent’s identity.
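One lightweight way to make evidence travel with the action, sketched with standard-library Python only; the record fields are illustrative assumptions, not a logging standard:

```python
import json, time, uuid

def log_decision(agent: str, action: str, rationale: str, evidence: list[str]) -> str:
    """Emit a structured, traceable record alongside every action."""
    record = {
        "trace_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "agent": agent,
        "action": action,
        "rationale": rationale,   # why, in the agent's own words
        "evidence": evidence,     # links or IDs a reviewer can follow
    }
    print(json.dumps(record))     # in practice, append to an audit store
    return record["trace_id"]

log_decision("ticket-router", "assign to data-eng",
             "schema drift matches a table owned by data-eng",
             ["ticket:4821", "lineage:warehouse.sales"])
```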
Know the failure modes
Agents fail in predictable ways. They hallucinate. They become over-confident. They loop. They optimise for the wrong reward. They cling to outdated context. These patterns repeat across industries. Practitioners who map them early can design sharper guardrails. Clearer prompts. Better tests. Faster shutdown paths. Stronger fallback mechanisms. The discipline is not in eliminating failure. It is in recognising it quickly and recovering with grace.
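A sketch of guardrails against two of those failure modes, looping and runaway iteration; the state fingerprint is an illustrative heuristic for "no progress", not a general solution:

```python
from typing import Callable

def run_with_guardrails(step: Callable[[dict], dict], state: dict,
                        max_iterations: int = 10) -> dict:
    """Catch the classic failures: loops, runaway iteration, no progress."""
    seen: set[str] = set()
    for _ in range(max_iterations):          # hard stop: no infinite runs
        state = step(state)
        fingerprint = repr(sorted(state.items()))
        if fingerprint in seen:              # same state twice = looping
            raise RuntimeError("agent is looping; invoke fallback path")
        seen.add(fingerprint)
        if state.get("done"):
            return state
    raise RuntimeError("iteration budget exhausted; shut down gracefully")
```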
The story ends in orchestration
The real story of agentic AI is orchestration. Agents do not sit in isolation. They sit inside business processes, data products, API calls, dashboards, decisions, and human judgement. The practitioner’s role is to weave these elements into a cohesive whole. To design systems where agents amplify human skill rather than replace it. To create workflows where machine autonomy and human insight lift each other. When orchestration is done well, the system becomes more than the sum of its parts. It becomes a new way of working.
The long view on agents
Agentic AI is not a technical trend. It is a shift in how organisations think about work, decision-making, and value. For data practitioners, this shift calls for a blend of craft, narrative sense, and operational discipline. Those who learn to frame purpose, model context, shape tasks, govern behaviour, and orchestrate collaboration will lead the next chapter. Because in the end, agentic AI is not only about what agents can do. It is about what organisations can become when they learn to work well with them.
Dear friends of the SAS community,

I am confused about how to output the corrected original data set in a covariance analysis using PROC MIXED. I need the covariate-corrected original data for further repeated-measures and factorial analysis: I carried out the covariance analysis using a one-way treatment design, but the experiment itself was a factorial design with repeated measures. I believe a slight change to my code should do it. Here is my code:

```sas
data exercise;
   input trt$ adg iwt; /*iwt is the covariate*/
   cards;
hfhs 23.21 38.92
hfhs 1.18 39.65
hfhs 8.09 38.8
hfhs 29.84 39.7
hfhs 89.89 38.45
hfhs 5.86 39.61
hfhs 1.67 39.1
hfhs 88.98 38.77
hfhs . .
hfhs 21.88 39.42
hfcon 95.81 37.76
hfcon 97.89 37.68
hfcon 97.88 38.14
hfcon 99.67 38.08
hfcon 99.58 37.42
hfcon 97.63 37.93
hfcon 96.74 38.32
hfcon 97.15 38.1
hfcon 91.10 38.26
lfhs 96.03 39.13
lfhs 59.53 38.98
lfhs 7.74 38.71
lfhs 9.92 38.8
lfhs 93.85 38.44
lfhs 6.79 38.84
lfhs 94.77 38.16
lfhs 6.37 39.18
lfhs 92.64 38.31
lfhs 16.74 38.64
lfcon 95.05 37.7
lfcon 97.29 38.46
lfcon 94.11 38.7
lfcon 98.08 38.88
lfcon 99.69 38.16
lfcon 99.62 38.11
lfcon 82.47 38.44
lfcon 98.20 37.94
lfcon 99.66 38.24
lfcon 98.52 37.95
;

proc mixed data=exercise covtest; /*check whether the covariate effect is significant; the answer is yes*/
   class trt;
   model adg=trt trt*iwt/noint solution;
run;

proc mixed data=exercise covtest; /*check whether the slopes are equal; the answer is yes*/
   class trt;
   model adg=trt iwt trt*iwt/solution;
run;

proc mixed data=exercise covtest; /*apply the covariance correction*/
   class trt;
   model adg=trt iwt/solution;
   lsmeans trt/adjust=tukey;
   ods output diffs=ppp lsmeans=mmm;
   ods listing exclude diffs lsmeans;
run;

%include 'd:\pdmix800.sas';
%pdmix800(ppp,mmm,alpha=.05,sort=yes); /*apply the pdmix800 macro*/
```

I sincerely appreciate all the assistance you are about to offer!
Hello,

I run code that creates data sets in the following order. Why do the data sets appear in a different order than the order in which they were created?
```sas
%let h1=2508; /**YYMM structure**/
%let h2=2507;
%let h3=2506;
%let h4=2505;
%let h5=2504;
%let h6=2503;

%macro Help_Macro_a;
   %do j=2 %to 6;
      proc sql;
         create table _L_CS_&&h&j. as
         select b.lakoach_y as lakoach_Y&h1., a.*
         from L_CS_&&h&j.(Rename=(lakoach_y=lakoach_y_&&h&j.)) as a
         inner join L_CS_&h1. as b
         on a.lakoach=b.lakoach
         ;
      quit;
   %end;
%mend Help_Macro_a;

%Help_Macro_a
```
Walk in ready to learn. Walk out ready to deliver. This is the data and AI conference you can't afford to miss. Register now and lock in 2025 pricing—just $495!